
    An object-based image analysis approach for detecting urban impervious surfaces

    Impervious surfaces are man-made surfaces that are highly resistant to the infiltration of water. Previous attempts to classify impervious surfaces from high spatial resolution imagery with pixel-based techniques have proven unsuitable for automated classification because of the high spectral variability and complex land covers of urban areas. Accurate and rapid classification of impervious surfaces would aid emergency management after extreme events such as floods, earthquakes, fires, tsunamis, and hurricanes by providing quick estimates and updated maps for emergency response. The objectives of this study were to: (1) compare classification accuracy between pixel-based and object-based image analysis (OBIA) methods, (2) examine whether OBIA could better detect urban impervious surfaces, and (3) develop an automated, generalized OBIA classification method for impervious surfaces. The study analyzed urban impervious surfaces using 1-meter spatial resolution, four-band Digital Orthophoto Quarter Quad (DOQQ) aerial imagery of downtown New Orleans, Louisiana, acquired as part of a post-Hurricane Katrina and Rita dataset. It compared the traditional pixel-based classification with four variations of the rule-based OBIA approach for classification accuracy. A four-class classification scheme was used for the analysis: impervious surfaces, vegetation, shadow, and water. The results show that OBIA accuracy ranges from 85.33% to 91.41%, compared with 80.67% classification accuracy for the pixel-based approach. OBIA rule-based method 4, which combines multi-resolution segmentation with derived spectral indices such as the Normalized Difference Vegetation Index (NDVI), the Normalized Difference Water Index (NDWI), and the Spectral Shape Index (SSI), was the best method, yielding a 91.41% classification accuracy. OBIA rule-based method 4 can be automated and generalized to multiple study areas. A test of the segmentation parameters shows that parameter values of scale ≤ 20, color/shape ranging from 0.1 to 0.3, and compactness/smoothness ranging from 0.4 to 0.6 yielded the highest classification accuracies. These results show that the developed OBIA method was accurate, generalizable, and capable of automation for the classification of urban impervious surfaces.
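
    The spectral indices named above have standard band-ratio forms: NDVI = (NIR − Red)/(NIR + Red) and NDWI = (Green − NIR)/(Green + NIR). As a rough illustration of how such indices feed a rule-based scheme, a minimal NumPy sketch for a four-band tile follows; the band order, the random placeholder data, and the 0.3 thresholds are illustrative assumptions, not values from the study.

        import numpy as np

        def normalized_difference(a, b):
            """Zero-safe normalized difference (a - b) / (a + b)."""
            a, b = a.astype(float), b.astype(float)
            num, den = a - b, a + b
            return np.divide(num, den, out=np.zeros_like(den), where=den != 0)

        # Placeholder arrays standing in for one blue/green/red/NIR DOQQ tile.
        blue, green, red, nir = (np.random.rand(512, 512) for _ in range(4))

        ndvi = normalized_difference(nir, red)    # vegetation scores high
        ndwi = normalized_difference(green, nir)  # open water scores high

        # A crude per-pixel rule in the spirit of the four-class scheme;
        # real OBIA rules would be applied per segment, not per pixel.
        vegetation = ndvi > 0.3
        water = ndwi > 0.3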

    Evaluation of the impacts of Hurricane Hugo on the land cover of Francis Marion National Forest, South Carolina using remote sensing

    Hurricane Hugo struck the South Carolina coast on the night of September 21, 1989, at Sullivan’s Island, making landfall as a Category 4 storm on the Saffir-Simpson scale (Hook et al. 1991). It is probably among the most studied and documented hurricanes in the United States (USDA Southern Research Station Publication 1996). A Landsat TM-based Hugo damage assessment study was conducted by Cablk et al. (1994) in the Hobcaw Barony forest. The present study assessed damage for a different, smaller study area near the Wambaw and Coffee Creek swamps. Its main objective was to compare the results of the traditional post-classification method and the triangular prism fractal method (hereafter TPSA, a spatial method) for change detection using Landsat TM data of the Francis Marion National Forest (hereafter FMNF) before and after Hurricane Hugo’s landfall (1987 and 1989). Additional methods considered for comparison were principal component analysis (hereafter PCA) and the tasseled cap transform (hereafter TCT). Classification accuracy was estimated at 81.44% and 85.71% for the two hurricane images, with four classes: water, woody wetland, forest, and a combined cultivated row crops/transitional barren class. Post-classification was successful in identifying the Wambaw Swamp, Coffee Creek Swamp, and the Little Wambaw Wilderness as having a gain in homogeneity, and it was, along with the local fractal method, the only method that gave the percentages of changed land cover areas. Visual comparison of the PCA and TCT images shows the dominant land cover changes in the study area, with the TCT in general better able to identify the features in all three of its transformed bands. The post-classification method, the PCA, and the TCT brightness and greenness bands did not report an increase in heterogeneity, but were successful in reporting gains in homogeneity. The local fractal TPSA method with a 17x17 moving window and five arithmetic steps was found to give the best visual representation of the textural patterns in the study area. The local fractal TPSA method was successful in identifying the land cover areas with the largest heterogeneity increase (a positive change in fractal dimension difference values) and the largest homogeneity increase (a negative change in fractal dimension difference values). The woody wetland class was found to have the biggest increase in homogeneity and the forest class the biggest increase in heterogeneity, in addition to the three swamp areas being identified as having an overall increased homogeneity.
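
    For readers unfamiliar with the triangular prism approach, a minimal sketch of a local TPSA fractal dimension estimate follows. It is a simplified reading of the triangular prism surface area method (prism surface areas computed at several step sizes, then a log-log regression, with D = 2 − slope); the window size matches the 17x17 window above, but the geometric step sizes and the plain least-squares fit are illustrative assumptions rather than the study's exact procedure.

        import numpy as np

        def tri_area(p, q, r):
            """Area of a 3-D triangle given its three vertices."""
            return 0.5 * np.linalg.norm(np.cross(q - p, r - p))

        def tpsa_dimension(window, steps=(1, 2, 4, 8, 16)):
            """Local fractal dimension of a square pixel window via the
            triangular prism surface area method."""
            areas = []
            n = window.shape[0]
            for s in steps:
                total = 0.0
                for i in range(0, n - s, s):
                    for j in range(0, n - s, s):
                        a = np.array([i, j, window[i, j]], float)
                        b = np.array([i, j + s, window[i, j + s]], float)
                        c = np.array([i + s, j, window[i + s, j]], float)
                        d = np.array([i + s, j + s, window[i + s, j + s]], float)
                        e = (a + b + c + d) / 4.0  # prism apex at the cell centre
                        total += (tri_area(a, b, e) + tri_area(b, d, e) +
                                  tri_area(d, c, e) + tri_area(c, a, e))
                areas.append(total)
            slope, _ = np.polyfit(np.log(steps), np.log(areas), 1)
            return 2.0 - slope  # a flat window gives slope 0, hence D = 2

        # Applied per pixel with a 17x17 moving window over a TM band, the
        # difference of pre- and post-storm D maps flags textural change.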

    FPGA structures for high speed and low overhead dynamic circuit specialization

    A Field Programmable Gate Array (FPGA) is a programmable digital electronic chip. The FPGA does not come with a predefined function from the manufacturer; instead, the developer has to define its function by implementing a digital circuit on the FPGA resources. The functionality of the FPGA can be reprogrammed as desired, hence the name “field programmable”. FPGAs are useful in small-volume digital electronic products because the design of a custom digital chip is expensive. Changing the FPGA (also called configuring it) is done by changing the configuration data (in the form of bitstreams) that defines the FPGA's functionality. These bitstreams are stored in a memory of the FPGA called the configuration memory. The SRAM cells of LookUp Tables (LUTs), Block Random Access Memories (BRAMs), and DSP blocks together form the configuration memory of an FPGA. The configuration data can be modified according to the user's needs to implement user-defined hardware. The simplest way to program the configuration memory is to download the bitstreams through a JTAG interface. However, modern techniques such as Partial Reconfiguration (PR) allow part of the configuration memory to be configured with partial bitstreams at run-time. The reconfiguration is achieved by swapping partial bitstreams into the configuration memory via a configuration interface called the Internal Configuration Access Port (ICAP). The ICAP is a hardware primitive (macro) in the FPGA through which an embedded processor can access the configuration memory internally. This reconfiguration technique adds the flexibility to use specialized circuits that are more compact and more efficient than their bulky generic counterparts. An example of such an implementation is the use of specialized multipliers instead of big generic multipliers in an FIR implementation with constant coefficients. To specialize these circuits and reconfigure them at run-time, researchers at the HES group proposed a novel technique called parameterized reconfiguration that can be used to efficiently and automatically implement Dynamic Circuit Specialization (DCS), which is built on top of the Partial Reconfiguration method. It uses a run-time reconfiguration technique tailored to implement a parameterized design. An application is said to be parameterized if some of its input values change much less frequently than the rest; these inputs are called parameters. Instead of implementing the parameters as regular inputs, DCS implements them as constants and optimizes the application for those constants. For every change in parameter values, the design is re-optimized (specialized) at run-time and implemented by reconfiguring the FPGA with the design optimized for the new set of parameters. In DCS, the bitstreams of the parameterized design are expressed as Boolean functions of the parameters. For every infrequent change in parameters, a specialized FPGA configuration is generated by evaluating the corresponding Boolean functions, and the FPGA is reconfigured with the specialized configuration. A detailed study of the overheads of DCS, together with suitable solutions in the form of appropriate custom FPGA structures, is the primary goal of this dissertation. I also suggest several improvements to the FPGA configuration memory architecture. After presenting the custom FPGA structures, I investigate the role of DCS on FPGA overlays and the use of custom FPGA structures that help reduce the overheads of DCS on FPGA overlays.
    By doing so, I hope to convince developers to use DCS (which now comes with minimal costs) in real-world applications. I start the investigation of the overheads of DCS by implementing an adaptive FIR filter (using the DCS technique) on three different Xilinx FPGA platforms: Virtex-II Pro, Virtex-5, and Zynq-SoC. The study of how DCS behaves, and of its overhead across the evolution of these three FPGA platforms, is the non-trivial basis for discovering the costs of DCS. I then propose custom FPGA structures (reconfiguration controllers and reconfiguration drivers) to reduce the main overhead of DCS: the reconfiguration time. These structures not only reduce the reconfiguration time but also help curb the power-hungry part of the DCS system. Next, I study the role of DCS on FPGA overlays and investigate the effect of the proposed FPGA structures on Virtual Coarse-Grained Reconfigurable Arrays (VCGRAs). I classify VCGRA implementations into three types, depending on the level of parameterization: the conventional VCGRA, the partially parameterized VCGRA, and the fully parameterized VCGRA. I have designed two variants of VCGRA grids for HPC image processing applications, namely the MAC grid and Pixie. Finally, I try to tackle the reconfiguration time overhead at the hardware level of the FPGA by customizing the FPGA configuration memory architecture. In this part of my research, I propose a parallel memory structure that drastically improves the reconfiguration time of DCS. However, this improvement comes with a significant hardware resource overhead, which will need to be addressed in future research on commercial FPGA configuration memory architectures.
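
    To make the core mechanism concrete: in DCS, each specialized configuration bit is a Boolean function of the parameter bits, so specialization reduces to evaluating those functions and writing the results into the configuration memory. A minimal sketch follows, assuming the functions were already extracted by the parameterized tool flow; the dictionary representation and names are illustrative, not the HES tool flow's actual data structures.

        # Each entry maps a configuration-bit identifier to a Boolean
        # function of the parameter bits (here: a 2-bit FIR coefficient).
        functions = {
            "lut0_bit0": lambda p: p["c0"],            # copies one parameter bit
            "lut0_bit1": lambda p: p["c0"] ^ p["c1"],  # XOR of parameter bits
        }

        def specialize(functions, params):
            """Evaluate every parameterized configuration bit for one concrete
            parameter assignment, yielding a specialized partial configuration."""
            return {bit: int(f(params)) for bit, f in functions.items()}

        partial_config = specialize(functions, {"c0": 1, "c1": 0})
        # -> {'lut0_bit0': 1, 'lut0_bit1': 1}; these bits would then be
        #    swapped into the configuration memory through the ICAP.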

    Using Optimality Theory and Reference Points to Improve the Diversity and Convergence of a Fuzzy-Adaptive Multi-Objective Particle Swarm Optimizer

    Particle Swarm Optimization (PSO) has received increasing attention from the evolutionary optimization research community over the last twenty years. PSO is a metaheuristic based on collective intelligence, obtained by emulating the social behavior of bird flocking and fish schooling. A number of multi-objective variants of the original PSO algorithm have been developed to extend its applicability to optimization problems with conflicting objectives; these multi-objective PSO (MOPSO) algorithms demonstrate performance comparable to other state-of-the-art metaheuristics. The existence of multiple optimal solutions (the Pareto-optimal set) in optimization problems with conflicting objectives is not the only challenge posed to an optimizer, as the optimizer must also identify and preserve a well-distributed set of solutions while searching the decision variable space. Recent attempts by evolutionary optimization researchers to incorporate mathematical convergence conditions into genetic algorithm optimizers have led to the derivation of a point-wise proximity measure based on the solution of the achievement scalarizing function (ASF) optimization problem with a complementary slackness condition; this measure quantifies the violation of the Karush-Kuhn-Tucker (KKT) necessary conditions of optimality. In this work, the aforementioned KKT proximity measure is incorporated into the original Adaptive Coevolutionary Multi-Objective Swarm Optimizer (ACMOPSO) to monitor the convergence of the sub-swarms towards the Pareto-optimal front and to provide feedback to Mamdani-type fuzzy logic controllers (FLCs) that are used for online adaptation of the algorithmic parameters. The proposed Fuzzy-Adaptive Multi-Objective Particle Swarm Optimizer with the KKT proximity measure (FAMOPSOkkt) utilizes a set of reference points to cluster the computed nondominated solutions. These clusters interact with their corresponding sub-swarms to provide the swarm leaders and are also used to manage the external archive of nondominated solutions. The performance of the proposed algorithm is evaluated on benchmark problems chosen from the multi-objective optimization literature and compared to that of state-of-the-art multi-objective optimization algorithms with similar features.
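
    As background for the multi-objective variants discussed above, a minimal sketch of the canonical single-objective PSO update follows (the standard textbook form; the inertia weight and acceleration coefficients are common defaults, not the values the fuzzy logic controllers would adapt online).

        import numpy as np

        def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
            """One PSO iteration: velocity blends inertia, attraction to each
            particle's personal best, and attraction to the swarm leader."""
            rng = rng or np.random.default_rng()
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            return x + v, v

        # Toy run on the sphere function f(x) = sum(x**2).
        rng = np.random.default_rng(0)
        x = rng.uniform(-5, 5, size=(20, 2))           # 20 particles in 2-D
        v = np.zeros_like(x)
        f = lambda pts: (pts ** 2).sum(axis=1)
        pbest = x.copy()
        gbest = pbest[np.argmin(f(pbest))]
        for _ in range(100):
            x, v = pso_step(x, v, pbest, gbest, rng=rng)
            better = f(x) < f(pbest)
            pbest[better] = x[better]
            gbest = pbest[np.argmin(f(pbest))]
        # In a MOPSO such as FAMOPSOkkt, the single gbest leader is replaced
        # by leaders drawn from the nondominated archive, here per cluster.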

    Microgel-based surface modifying system for stimuli-responsive functional finishing of cotton

    An innovative strategy for the functional finishing of textile materials is based on the incorporation of a thin layer of surface modifying systems (SMS) in the form of stimuli-sensitive microgels or hydrogels. Since the copolymerization of poly(N-isopropylacrylamide) with an ionizable polymer, such as chitosan, results in a microgel that is responsive to both temperature and pH, a microparticulate hydrogel of the poly-NiPAAm/chitosan copolymer (PNCS) was synthesized using a surfactant-free emulsion method. The microparticle size in the dry (collapsed) state was estimated at 200 nm by SEM and TEM, and the effect of temperature and pH on the microparticles was investigated by DLS and UV–vis spectrophotometry. The PNCS microparticles were incorporated onto the cotton material by a simple pad-dry-cure procedure from an aqueous microparticle dispersion containing 1,2,3,4-butanetetracarboxylic acid (BTCA) as a crosslinking agent. This application method provided sufficient coating integrity while maintaining the responsiveness of the surface modifying system. The stimuli-responsiveness of the modified cotton fabric was confirmed in terms of the regulation of its water uptake as a function of pH and temperature.

    Pixie: A heterogeneous Virtual Coarse-Grained Reconfigurable Array for high performance image processing applications

    Coarse-Grained Reconfigurable Arrays (CGRAs) enable ease of programmability and result in low development costs, specifically in reconfigurable computing applications. Their smaller compilation cost and reduced reconfiguration overhead make them attractive platforms for accelerating high-performance computing applications such as image processing. However, CGRAs are ASICs and are therefore expensive to produce, whereas Field Programmable Gate Arrays (FPGAs) are relatively cheap for low-volume products but not as easily programmable. We combine the best of both worlds by implementing a Virtual Coarse-Grained Reconfigurable Array (VCGRA) on an FPGA; VCGRAs are a trade-off between FPGAs, with their large routing overheads, and ASICs. In this perspective, we present a novel heterogeneous VCGRA called "Pixie" that is suitable for implementing high-performance image processing applications. The proposed VCGRA contains generic processing elements and virtual channels that are described in the hardware description language VHDL. Both elements have been optimized using the parameterized configuration tool flow, resulting in a resource reduction of 24% for each processing element and 82% for each virtual channel.
    Comment: Presented at the 3rd International Workshop on Overlay Architectures for FPGAs (OLAF 2017), arXiv:1704.0880
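
    Purely as a conceptual illustration of the overlay idea (not Pixie's actual VHDL architecture), a VCGRA can be modeled as processing elements whose operations are selected by configuration words and whose results flow to the next stage over virtual channels; the operations below are made up for the sketch.

        import numpy as np

        # Candidate PE operations; a real PE would select one through its
        # (parameterized) configuration rather than a Python dict lookup.
        OPS = {
            "blur":   lambda im: (im + np.roll(im, 1, 0) + np.roll(im, -1, 0)) / 3,
            "thresh": lambda im: (im > 0.5).astype(float),
            "invert": lambda im: 1.0 - im,
        }

        def run_pipeline(image, pe_config):
            """Stream an image through a chain of configured PEs; the data
            hand-off between stages stands in for a virtual channel."""
            for op in pe_config:
                image = OPS[op](image)
            return image

        out = run_pipeline(np.random.rand(64, 64), ["blur", "thresh", "invert"])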

    How to efficiently reconfigure tunable lookup tables for dynamic circuit specialization

    Dynamic Circuit Specialization (DCS) is used to optimize the implementation of a parameterized application on an FPGA. Instead of implementing the parameters as regular inputs, the DCS approach implements them as constants. When the parameter values change, the design is re-optimized for the new constant values by reconfiguring the FPGA. This allows a faster and more resource-efficient implementation, but investigations have shown that reconfiguration time is the major limitation of DCS implementations on Xilinx FPGAs. The limitation arises from the inefficient reconfiguration methods used in conventional DCS implementations. To address this issue, we propose several approaches that drastically reduce the reconfiguration time and improve the reconfiguration speed. In this context, this paper presents the use of custom reconfiguration controllers and custom reconfiguration software drivers, along with placement constraints, to shorten the reconfiguration time. Our results show an improvement in reconfiguration speed of at least a factor of 14 using the Xilinx reconfiguration controller together with placement constraints. The improvement reaches a factor of 40 with the combination of a custom reconfiguration controller, custom software drivers, and placement constraints. We also observe a degradation of the system's performance of at least 6% due to the placement constraints.
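
    As a rough sketch of where a custom software driver saves time (the register layout, base address, and frame size below are invented placeholders, not the Xilinx ICAP interface): mapping the reconfiguration controller into memory once and streaming only the changed frames avoids the per-word call overhead of a generic driver.

        import mmap, os, struct

        ICAP_BASE = 0x40000000    # placeholder physical base address (assumption)

        def write_frames(changed_frames):
            """Stream only the frames whose bits DCS re-evaluated to a
            memory-mapped reconfiguration controller (requires root)."""
            fd = os.open("/dev/mem", os.O_RDWR | os.O_SYNC)
            try:
                regs = mmap.mmap(fd, 4096, offset=ICAP_BASE)
                for frame_addr, words in changed_frames:  # words: 32-bit ints
                    regs[0:4] = struct.pack("<I", frame_addr)   # address register
                    payload = struct.pack("<%dI" % len(words), *words)
                    regs[4:4 + len(payload)] = payload          # data window
                regs.close()
            finally:
                os.close(fd)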